Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation
Magnetic Resonance Imaging (MRI) is widely used in routine clinical diagnosis
and treatment. However, variations in MRI acquisition protocols result in
different appearances of normal and diseased tissue in the images.
Convolutional neural networks (CNNs), which have been shown to be successful in many
medical image analysis tasks, are typically sensitive to the variations in
imaging protocols. Therefore, in many cases, networks trained on data acquired
with one MRI protocol do not perform satisfactorily on data acquired with
different protocols. This limits the use of models trained on large annotated
legacy datasets when applied to a new dataset from a different domain, a
situation that recurs often in clinical settings. In this study, we aim to answer the
following central questions regarding domain adaptation in medical image
analysis: given a fitted legacy model, 1) how much data from the new domain is
required for adequate adaptation of the original network, and 2) what portion
of the pre-trained model parameters should be retrained given a certain number
of the new domain training samples? To address these questions, we conducted
extensive experiments on the white matter hyperintensity segmentation task. We
trained a CNN on legacy MR images of the brain and evaluated the performance of the
domain-adapted network on the same task with images from a different domain. We
then compared the performance of the model to the surrogate scenarios where
either the same trained network is used or a new network is trained from
scratch on the new dataset. The domain-adapted network, tuned with only two
training examples, achieved a Dice score of 0.63, substantially outperforming a
similar network trained from scratch on the same set of examples.
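The partial-retraining strategy the study investigates, freezing shallow layers of a pretrained network and updating only deeper ones on a handful of new-domain samples, can be sketched in miniature. The two-layer numpy "network", the layer sizes, and the data below are purely illustrative stand-ins for the study's actual CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "legacy" model: two linear layers with a ReLU in between
# (a stand-in for the pretrained CNN; all sizes are illustrative).
W1 = rng.normal(size=(4, 8))   # shallow layer -- kept frozen
W2 = rng.normal(size=(8, 1))   # deep layer   -- fine-tuned

def forward(X):
    return np.maximum(X @ W1, 0.0) @ W2

# A handful of new-domain samples (the study adapts with as few as two).
X_new = rng.normal(size=(2, 4))
y_new = rng.normal(size=(2, 1))

initial_loss = float(np.mean((forward(X_new) - y_new) ** 2))

# Fine-tune only W2 by gradient descent on the MSE; W1 never changes,
# mimicking retraining only a portion of the pretrained parameters.
lr = 0.01
for _ in range(200):
    h = np.maximum(X_new @ W1, 0.0)
    grad_W2 = h.T @ (h @ W2 - y_new) / len(X_new)
    W2 -= lr * grad_W2

loss = float(np.mean((forward(X_new) - y_new) ** 2))
print(loss < initial_loss)  # adaptation improves the fit on the new domain
```

Which layers to freeze, and how many new-domain samples suffice, are exactly the two questions the abstract poses.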
Haptic interface control-design issues and experiments with a planar device
Describes the haptic rendering of a virtual environment by drawing upon concepts developed in the area of teleoperation. A four-channel teleoperation architecture is shown to be an effective means of coordinating the control of a 3-DOF haptic interface with the simulation of a virtual dynamic environment.
Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial
Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. First, by means of an ellipse segmentation algorithm and a 2D/3D feature-based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration, which serves to recover a refined and accurate estimate of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a ±10° and ±10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
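The coarse-then-refine idea behind the hybrid registration can be illustrated with a toy 2D translation-only problem. The images, the normalized cross-correlation metric, and the search ranges below are stand-ins for the actual feature-based initialization and intensity-based refinement stages, not the paper's method:

```python
import numpy as np

# Toy "fixed" image: a smooth blob (a stand-in for the fluoroscopy image);
# the "moving" image is the same blob under an unknown translation.
yy, xx = np.mgrid[0:32, 0:32]
fixed = np.exp(-((yy - 16) ** 2 + (xx - 14) ** 2) / 40.0)
true_shift = (3, -2)
moving = np.roll(fixed, true_shift, axis=(0, 1))

def ncc(a, b):
    """Normalized cross-correlation, a common intensity similarity metric."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(s):
    return ncc(fixed, np.roll(moving, (-s[0], -s[1]), axis=(0, 1)))

# Stage 1: coarse search on a sparse grid (plays the role of the
# feature-based registration that gets close to the true pose).
coarse = max(((dy, dx) for dy in range(-6, 7, 2) for dx in range(-6, 7, 2)),
             key=score)

# Stage 2: intensity-based refinement in a small neighbourhood of the estimate.
refined = max(((coarse[0] + dy, coarse[1] + dx)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)), key=score)

print(refined)  # recovers the true shift (3, -2)
```

The same two-stage pattern, a cheap initializer followed by a local intensity-driven optimizer, underlies the framework described above, generalized to 6-DOF C-arm pose.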
A statistical atlas-based technique for automatic segmentation of the first Heschl's gyrus in human auditory cortex from MR images
We present an automatic method for the segmentation of the first transverse temporal gyrus of Heschl (HG), the morphological marker for primary auditory cortex in humans. The proposed technique utilizes a statistical anatomical atlas of the gyrus, generated from a set of training samples using principal component analysis. The training set consists of MRI data from 12 subjects with the corresponding Heschl's gyri manually labeled in each hemisphere (separate atlases were generated for each hemisphere). We used a leave-one-out approach to automatically segment Heschl's gyri in both hemispheres from the MR image data using the generated atlases. We assessed the accuracy of this atlas-based technique by using it to segment the HG region from several test cases and finding the overlap between the segmented and labeled HG regions. Results demonstrated more than 75% and 83% accuracy in the extraction of the HG volumes in the left and right hemispheres, respectively. It is expected that the proposed tool can be adapted to extract other anatomical regions in the brain.
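The atlas construction step, principal component analysis over a small set of labelled training shapes, reduces to a few lines of linear algebra. The landmark vectors, their sizes, and the number of retained modes below are made up for illustration; the study works on full MR volumes rather than toy vectors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: 12 "shapes", each a flattened vector of
# landmark coordinates (stand-ins for the labelled Heschl's gyri).
n_subjects, n_points = 12, 20
mean_shape = rng.normal(size=n_points)
shapes = mean_shape + 0.1 * rng.normal(size=(n_subjects, n_points))

# Statistical atlas via PCA: the mean plus principal modes of variation.
mu = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
modes = Vt[:3]                       # keep the 3 strongest modes

# A new shape is approximated by projecting onto the atlas subspace.
new_shape = mean_shape + 0.1 * rng.normal(size=n_points)
coeffs = modes @ (new_shape - mu)
reconstruction = mu + modes.T @ coeffs

err_atlas = float(np.linalg.norm(new_shape - reconstruction))
err_mean = float(np.linalg.norm(new_shape - mu))
print(err_atlas < err_mean)  # the modes explain variation the mean alone cannot
```

A leave-one-out evaluation, as in the abstract, simply rebuilds `mu` and `modes` from 11 of the 12 subjects and tests on the held-out one.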
A multi-center milestone study of clinical vertebral CT segmentation
A multi-center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted as the spine segmentation challenge at the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal anatomy. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method, as well as suggestions for future improvement, are discussed. This is the first multi-center comparative study of vertebra segmentation methods, and it provides an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention.
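The Dice coefficient used as the study's primary metric is straightforward to compute from two binary masks. The toy 2D masks below are illustrative; the challenge evaluated 3D CT segmentations the same way:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy vertebra masks: the prediction overlaps the reference imperfectly.
ref = np.zeros((10, 10), dtype=bool)
pred = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True      # 36 voxels
pred[3:9, 3:9] = True     # 36 voxels; overlap is 5x5 = 25
print(round(dice(ref, pred), 4))  # 2*25 / (36+36) = 0.6944
```

A Dice of 0.93-0.96, as the top teams report, corresponds to near-complete voxel overlap with the reference.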
Models of Temporal Enhanced Ultrasound Data for Prostate Cancer Diagnosis: The Impact of Time-Series Order
Recent studies have shown the value of Temporal Enhanced Ultrasound (TeUS) imaging for tissue characterization in transrectal ultrasound-guided prostate biopsies. Here, we present results of experiments designed to study the impact of temporal order of the data in TeUS signals. We assess the impact of variations in temporal order on the ability to automatically distinguish benign prostate tissue from malignant tissue. We have previously used Hidden Markov Models (HMMs) to model TeUS data, as HMMs capture temporal order in time series. In the work presented here, we use HMMs to model malignant and benign tissues; the models are trained and tested on TeUS signals while introducing variation to their temporal order. We first model the signals in their original temporal order, followed by modeling the same signals under various time rearrangements. We compare the performance of these models for tissue characterization. Our results show that models trained on the original order-preserving signals perform statistically significantly better at distinguishing between malignant and benign tissues than those trained on rearranged signals. The performance degrades as the amount of temporal variation increases. Specifically, accuracy of tissue characterization decreases from 85% using models trained on original signals to 62% using models trained and tested on signals that are completely temporally rearranged. These results indicate the importance of order in characterization of tissue malignancy from TeUS data.
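Why rearranging a time series destroys the information an order-aware model exploits can be shown with a toy experiment. The sketch below is not an HMM; it uses lag-1 autocorrelation as a simple order-sensitive stand-in for the HMM likelihood, and the synthetic signal is illustrative rather than real TeUS data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy TeUS-like signal: a slowly varying (temporally correlated) series.
n = 500
signal = np.cumsum(rng.normal(size=n))
signal = (signal - signal.mean()) / signal.std()

def lag1_autocorr(x):
    """A minimal order-sensitive statistic: correlation between
    consecutive samples. Any permutation-invariant statistic (mean,
    histogram, ...) would be blind to the rearrangement below."""
    x = (x - x.mean()) / x.std()
    return float(np.mean(x[:-1] * x[1:]))

original = lag1_autocorr(signal)

shuffled = signal.copy()
rng.shuffle(shuffled)            # complete temporal rearrangement
rearranged = lag1_autocorr(shuffled)

print(original > rearranged)  # order carries information; shuffling removes it
```

The study's HMMs measure the same effect with a full probabilistic model, which is why their accuracy drops from 85% to 62% under complete rearrangement.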
Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound
Lung ultrasound (LUS) is an important imaging modality used by emergency
physicians to assess pulmonary congestion at the patient bedside. B-line
artifacts in LUS videos are key findings associated with pulmonary congestion.
Not only can the interpretation of LUS be challenging for novice operators, but
visual quantification of B-lines remains subject to observer variability. In
this work, we investigate the strengths and weaknesses of multiple deep
learning approaches for automated B-line detection and localization in LUS
videos. We curate and publish BEDLUS, a new ultrasound dataset comprising
1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines.
Based on this dataset, we present a benchmark of established deep learning
methods applied to the task of B-line detection. To pave the way for
interpretable quantification of B-lines, we propose a novel "single-point"
approach to B-line localization using only the point of origin. Our results
show that (a) the area under the receiver operating characteristic curve ranges
from 0.864 to 0.955 for the benchmarked detection methods, (b) within this
range, the best performance is achieved by models that leverage multiple
successive frames as input, and (c) the proposed single-point approach for
B-line localization reaches an F1-score of 0.65, performing on par with the
inter-observer agreement. The dataset and developed methods can facilitate
further biomedical research on automated interpretation of lung ultrasound with
the potential to expand its clinical utility.
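Scoring the proposed single-point localization against expert annotations amounts to matching predicted B-line origin points to ground-truth points within a tolerance and computing an F1-score. The greedy matching rule, the pixel tolerance, and the coordinates below are assumptions for illustration, not the paper's exact evaluation protocol:

```python
import math

def f1_point_matching(pred, gt, tol=10.0):
    """Greedily match predicted points to ground-truth points one-to-one
    within a pixel tolerance, then compute precision/recall/F1."""
    unmatched = list(gt)
    tp = 0
    for p in pred:
        best = min(unmatched, key=lambda g: math.dist(p, g), default=None)
        if best is not None and math.dist(p, best) <= tol:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# Illustrative B-line origin points (pixel coordinates are made up).
gt = [(50, 120), (80, 200), (130, 60)]
pred = [(52, 118), (85, 230), (128, 64)]   # two hits within tolerance, one miss
print(round(f1_point_matching(pred, gt), 2))  # 2/3 precision and recall -> 0.67
```

An F1 of 0.65, as reported for the single-point approach, means the model agrees with the annotators roughly as often as annotators agree with each other.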
A coarse-to-fine approach to prostate boundary segmentation in ultrasound images
BACKGROUND: In this paper a novel method for prostate segmentation in transrectal ultrasound images is presented. METHODS: A segmentation procedure consisting of four main stages is proposed. In the first stage, a locally adaptive contrast enhancement method is used to generate a well-contrasted image. In the second stage, this enhanced image is thresholded to extract an area containing the prostate (or large portions of it). Morphological operators are then applied to obtain a point inside this area. Afterwards, a Kalman estimator is employed to distinguish the boundary from irrelevant parts (usually caused by shadow) and generate a coarsely segmented version of the prostate. In the third stage, dilation and erosion operators are applied to extract outer and inner boundaries from the coarse estimate. Consequently, fuzzy membership functions describing regional and gray-level information are employed to selectively enhance the contrast within the prostate region. In the last stage, the prostate boundary is extracted using strong edges obtained from the selectively enhanced image and information from the vicinity of the coarse estimate. RESULTS: A total average similarity of 98.76% (±0.68) with gold standards was achieved. CONCLUSION: The proposed approach represents a robust and accurate approach to prostate segmentation.
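The third-stage use of dilation and erosion to bracket the boundary can be sketched with plain numpy. The structuring element (a 4-connected cross implemented via array shifts), the toy square mask, and the iteration counts are illustrative, not the paper's parameters:

```python
import numpy as np

def dilate(mask, it=1):
    """Binary dilation with a 4-connected (cross-shaped) structuring
    element, implemented with array shifts."""
    for _ in range(it):
        out = mask.copy()
        out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
        out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
        mask = out
    return mask

def erode(mask, it=1):
    # Erosion is dilation of the complement (duality for symmetric elements).
    return ~dilate(~mask, it)

# Toy coarse prostate estimate (a square standing in for the Kalman output).
coarse = np.zeros((20, 20), dtype=bool)
coarse[5:15, 5:15] = True

outer = dilate(coarse, 2)     # outer limit of the boundary band
inner = erode(coarse, 2)      # inner limit of the boundary band
band = outer & ~inner         # region in which the fine boundary is sought
print(int(inner.sum()), int(coarse.sum()), int(outer.sum()))
```

Restricting the final edge search to `band` is what makes the last stage robust: strong edges far from the coarse estimate are ignored by construction.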
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and
healthcare, the deployment and adoption of AI technologies remain limited in
real-world clinical practice. In recent years, concerns have been raised about
the technical, clinical, ethical and legal risks associated with medical AI. To
increase real-world adoption, it is essential that medical AI tools are trusted
and accepted by patients, clinicians, health organisations and authorities.
This work describes the FUTURE-AI guideline as the first international
consensus framework for guiding the development and deployment of trustworthy
AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and
currently comprises 118 inter-disciplinary experts from 51 countries
representing all continents, including AI scientists, clinicians, ethicists,
and social scientists. Over a two-year period, the consortium defined guiding
principles and best practices for trustworthy AI through an iterative process
comprising an in-depth literature review, a modified Delphi survey, and online
consensus meetings. The FUTURE-AI framework was established based on 6 guiding
principles for trustworthy AI in healthcare, i.e. Fairness, Universality,
Traceability, Usability, Robustness and Explainability. Through consensus, a
set of 28 best practices were defined, addressing technical, clinical, legal
and socio-ethical dimensions. The recommendations cover the entire lifecycle of
medical AI, from design, development and validation to regulation, deployment,
and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which
provides a structured approach for constructing medical AI tools that will be
trusted, deployed and adopted in real-world practice. Researchers are
encouraged to take the recommendations into account in proof-of-concept stages
to facilitate the future translation of medical AI into clinical practice.